Learning without concentration for general loss functions
Similar articles
Unregularized Online Learning Algorithms with General Loss Functions
In this paper, we consider unregularized online learning algorithms in a reproducing kernel Hilbert space (RKHS). First, we derive explicit convergence rates of the unregularized online learning algorithms for classification associated with a general α-activating loss (see Definition 1 below). Our results extend and refine the results in [30] for the least-square loss and the recent result [3...
Forecasting under General Loss Functions
This paper presents some results for solving prediction problems under general asymmetric loss functions. We prove existence of the optimal predictor, and uniqueness under a certain additional assumption fulfilled, for instance, by convex prediction error losses. Furthermore, we study the question of finiteness of the optimal predictor for prediction error losses with saturation.
Efficient algorithms for learning kernels from multiple similarity matrices with general convex loss functions
In this paper we consider the problem of learning an n × n kernel matrix from m (≥ 1) similarity matrices under a general convex loss. Past research has extensively studied the m = 1 case and has derived several algorithms that require sophisticated techniques such as ACCP, SOCP, etc. The existing algorithms do not apply if one uses arbitrary losses and often cannot handle the m > 1 case. We present ...
Online Nonparametric Regression with General Loss Functions
This paper establishes minimax rates for online regression with arbitrary classes of functions and general losses. We show that below a certain threshold for the complexity of the function class, the minimax rates depend on both the curvature of the loss function and the sequential complexities of the class. Above this threshold, the curvature of the loss does not affect the rates. Furthermore...
A Convex Surrogate Operator for General Non-Modular Loss Functions
Empirical risk minimization frequently employs convex surrogates of underlying discrete loss functions in order to achieve computational tractability during optimization. However, classical convex surrogates can only tightly bound modular, submodular, or supermodular loss functions separately while maintaining polynomial-time computation. In this work, a novel generic convex ...
Journal
Journal title: Probability Theory and Related Fields
Year: 2017
ISSN: 0178-8051,1432-2064
DOI: 10.1007/s00440-017-0784-y